Robots have been brought to work close to humans in many scenarios. For coexistence and collaboration, robots should be safe and pleasant for humans to interact with. To this end, robots could be made physically soft and equipped with multimodal sensing/perception, so that they are better aware of the surrounding environment and can respond properly to human actions and intentions. This paper introduces a novel soft robotic link, named ProTac, that possesses multiple sensing modes, tactile and proximity sensing, based on computer vision and a functional material. These modalities arise from a layered structure of a soft transparent silicone skin, a polymer-dispersed liquid crystal (PDLC) film, and reflective markers. The PDLC film can actively switch between opaque and transparent states, from which tactile sensing and proximity sensing are obtained using only cameras built inside the ProTac link. Inference algorithms for tactile and proximity perception are introduced. Evaluations of the two sensing modalities demonstrate that, with a simple activation strategy, the ProTac link can effectively perceive useful information from both approaching and in-contact obstacles. The proposed sensing device is expected to enable the design of robots with softness, whole-body and multimodal sensing, and safe control strategies.
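To make the dual-mode operation concrete, here is a minimal sketch of the alternating sensing loop the abstract implies: the PDLC film toggles between an opaque state (tactile mode, where marker displacement on the inner skin encodes contact) and a transparent state (proximity mode, where the camera sees through the skin). All interfaces and estimators below are hypothetical placeholders, not the authors' actual API.

```python
# Sketch of a ProTac-style dual-mode sensing cycle; pdlc and camera are
# assumed driver objects with set_state()/read_frame() methods.

def estimate_contact(frame):
    # Placeholder: the real system would track reflective-marker
    # displacement in the opaque-state image to infer contact.
    return {"contact": False}

def detect_obstacle(frame):
    # Placeholder: in the transparent state the camera sees through the
    # skin, so approaching objects can be detected before contact.
    return {"obstacle": None}

def sense_once(pdlc, camera):
    pdlc.set_state("opaque")        # tactile mode
    tactile = estimate_contact(camera.read_frame())
    pdlc.set_state("transparent")   # proximity mode
    proximity = detect_obstacle(camera.read_frame())
    return tactile, proximity
```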
This paper aims to address Multiple Object Tracking (MOT), an important problem in computer vision that remains challenging due to many practical issues, especially occlusion. We propose a new real-time Deep Perspective-aware Multiple Object Tracking (DP-MOT) method to tackle the occlusion problem in MOT. A simple yet efficient Subject-Ordered Depth Estimation (SODE) is first proposed to automatically order the depth positions of detected subjects in a 2D scene in an unsupervised manner. Using the output of SODE, a new Active pseudo-3D Kalman filter, a simple yet effective extension of the Kalman filter with dynamic control variables, is proposed to dynamically update the motion of objects. In addition, a new high-order association approach is proposed in the data association step to incorporate first-order and second-order relationships between detected objects. The proposed method consistently achieves state-of-the-art performance compared with recent MOT methods on standard MOT benchmarks.
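The "Kalman filter with dynamic control variables" can be pictured with a small sketch. The following is one plausible reading, not the paper's implementation: a constant-velocity filter whose prediction step accepts a control input u, which the SODE depth ordering could supply to nudge the predicted motion of occluded objects.

```python
import numpy as np

class ControlledKalman:
    def __init__(self, dim=4):  # state: [x, y, vx, vy]
        self.x = np.zeros(dim)
        self.P = np.eye(dim)
        self.F = np.eye(dim)
        self.F[0, 2] = self.F[1, 3] = 1.0   # constant-velocity model
        self.B = np.vstack([np.zeros((2, 2)), np.eye(2)])  # control -> velocity
        self.H = np.eye(2, dim)             # observe position only
        self.Q = 0.01 * np.eye(dim)
        self.R = 0.1 * np.eye(2)

    def predict(self, u):
        # u is a 2-vector standing in for the dynamic control variable
        # (assumed here to be derived from the depth order of nearby objects).
        self.x = self.F @ self.x + self.B @ u
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
```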
Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research efforts in recent years for English text, the task of ED in other languages has been significantly less explored. Switching to non-English languages, important research questions for ED include how well existing ED models perform on different languages, how challenging ED is in other languages, and how well ED knowledge and annotation can be transferred across languages. To answer those questions, it is crucial to obtain multilingual ED datasets that provide consistent event annotation for multiple languages. There exist some multilingual ED datasets; however, they tend to cover only a handful of languages, mainly popular ones. Many languages are not covered by existing multilingual ED datasets. In addition, the current datasets are often small and not accessible to the public. To overcome those shortcomings, we introduce a new large-scale multilingual dataset for ED (called MINION) that consistently annotates events for 8 different languages; 5 of them have not been supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across languages in MINION, which altogether call for more research effort in this area.
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE) that aims to recognize event mentions and their arguments (i.e., participants) from text. Due to its importance, extensive methods and resources have been developed for Event Extraction. However, one limitation of current EE research is its under-exploration of non-English languages, for which the lack of high-quality multilingual EE datasets for model training and evaluation has been the main hindrance. To address this limitation, we propose a novel Multilingual Event Extraction dataset (MEE) that provides annotation for more than 50K event mentions in 8 typologically different languages. MEE comprehensively annotates data for entity mentions, event triggers and event arguments. We conduct extensive experiments on the proposed dataset to reveal challenges and opportunities for multilingual EE.
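For readers unfamiliar with the task, the record below illustrates what comprehensive EE annotation covers: entity mentions, an event trigger, and argument roles. This schema is invented for exposition and is not MEE's actual file format.

```python
# Illustrative (hypothetical) event-extraction annotation for one sentence.
example = {
    "sentence": "The company hired three engineers in Hanoi last week.",
    "entities": [
        {"text": "The company", "type": "ORG", "span": [0, 11]},
        {"text": "three engineers", "type": "PER", "span": [18, 33]},
        {"text": "Hanoi", "type": "LOC", "span": [37, 42]},
    ],
    "events": [
        {
            "trigger": {"text": "hired", "span": [12, 17]},
            "type": "Personnel.Start-Position",
            "arguments": [
                {"entity": 0, "role": "Employer"},
                {"entity": 1, "role": "Employee"},
                {"entity": 2, "role": "Place"},
            ],
        }
    ],
}
```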
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
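Since the checkpoints are publicly released, BLOOM can be loaded through the Hugging Face transformers library. The snippet below uses the smaller bigscience/bloom-560m variant so it runs on modest hardware; the full 176B model is bigscience/bloom.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```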
Livestream videos are one way for creators to share their creative works with their audience. In these videos, the streamer shares how a final goal can be achieved by using various tools in one or several programs for a creative project, discussing the steps required to reach that goal. These videos can thus provide substantial educational content for learning how to use the tools employed by the streamer. However, one drawback is that the streamer might not provide enough detail for every step, making it hard for learners to follow along. To alleviate this issue, one solution is to link the streaming videos to relevant tutorials available for the tools used in the videos. More specifically, a system can analyze the content of a livestream video and recommend the most relevant tutorials. Since existing document recommendation models cannot handle this situation, in this work we present a novel dataset and model for the task of tutorial recommendation for livestreaming videos. Our extensive analyses of the proposed dataset and model reveal the challenging nature of this task.
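To make the task concrete, here is a naive retrieval baseline, not the paper's model: embed the transcript of a livestream segment and every candidate tutorial, then recommend tutorials by cosine similarity. TF-IDF stands in for whatever encoder the actual system uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(transcript: str, tutorials: list[str], k: int = 3) -> list[int]:
    """Return indices of the k tutorials most similar to the transcript."""
    vec = TfidfVectorizer(stop_words="english")
    mat = vec.fit_transform([transcript] + tutorials)
    sims = cosine_similarity(mat[0], mat[1:]).ravel()
    return sims.argsort()[::-1][:k].tolist()
```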
Keyphrase extraction is one of the important tasks for document understanding in NLP. While most prior works have been devoted to formal settings, such as books, news, or web blogs, informal texts such as video transcripts are less explored. To address this limitation, in this work we present a novel corpus and method for keyphrase extraction from the transcripts of videos streamed on the Behance platform. More specifically, a novel data augmentation is proposed to enrich the model with background knowledge extracted from other domains for the keyphrase extraction task. Extensive experiments on the proposed dataset show the effectiveness of the introduced method.
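The following is a simplified illustration, not the paper's method, of cross-domain data augmentation for keyphrase extraction framed as BIO sequence labeling: out-of-domain labeled examples are mixed into the in-domain training set so the model picks up background knowledge about what keyphrases look like.

```python
import random

def to_bio(tokens, keyphrases):
    """Label tokens with B/I/O tags for any keyphrase they start/continue."""
    tags = ["O"] * len(tokens)
    for kp in keyphrases:
        kp_toks = kp.split()
        for i in range(len(tokens) - len(kp_toks) + 1):
            if tokens[i:i + len(kp_toks)] == kp_toks:
                tags[i] = "B-KP"
                for j in range(1, len(kp_toks)):
                    tags[i + j] = "I-KP"
    return tags

def augment(in_domain, out_of_domain, ratio=0.3, seed=0):
    """Mix a fraction of out-of-domain examples into the training set."""
    rng = random.Random(seed)
    extra = rng.sample(out_of_domain,
                       min(len(out_of_domain), int(ratio * len(in_domain))))
    return in_domain + extra
```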
This paper presents a facial expression recognition model and a description generation model to build descriptive sentences for images and the facial expressions of the people in them. Our study shows that YOLOv5 achieves better results than a traditional CNN on all emotions of the KDEF dataset; specifically, the accuracies of the CNN and YOLOv5 models are 0.853 and 0.938, respectively. A model for image captioning based on the merge architecture is proposed, using VGG16 with descriptions encoded by an LSTM model. YOLOv5 is also used to identify the dominant colors of objects in an image and, when necessary, to correct the color words in the generated description. If the description contains words referring to a person, we recognize the emotion of the person in the image. Finally, we combine the results of all models to create sentences describing the visual content and human emotions in images. Experimental results on the Flickr8k dataset in Vietnamese achieve BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.628, 0.425, 0.280, and 0.174, respectively.
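As a brief aside, the reported BLEU-1 through BLEU-4 scores can be computed with NLTK as sketched below; the tokenized Vietnamese reference/candidate pairs here are placeholders, not the paper's data.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["một", "người", "đang", "cười"]]]   # per-hypothesis reference lists
candidates = [["một", "người", "cười"]]             # model outputs (tokenized)
smooth = SmoothingFunction().method1

for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))      # uniform weights up to n-grams
    score = corpus_bleu(references, candidates, weights=weights,
                        smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```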
Lenslet images and videos, which contain rich information, require large data storage and incur high transmission costs. While lenslet image coding has been studied extensively, research on lenslet video coding is very limited. We investigate motion compensation for lenslet video coding by considering the problem in the ray-space domain rather than in the conventional pixel domain. Here, we develop a novel motion compensation scheme for lenslet video under two sub-axes of ray-space motion: integer ray-space motion and fractional ray-space motion. The proposed scheme designs light-field motion-compensated prediction so that it can be easily integrated into well-known video coding techniques such as HEVC. Experimental results show significant compression efficiency compared with existing methods, with an average gain of 19.63% and a peak gain of 29.1%.
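A toy illustration, not the paper's codec, of the two ray-space motion cases: an integer shift is pure indexing, while a fractional shift requires interpolation. A lenslet frame is modeled here as a 4D array L[u, v, s, t] (angular coordinates u, v; spatial coordinates s, t), an assumed parameterization for exposition.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def ray_space_predict(L, motion):
    """Predict a block by shifting along the spatial (s, t) axes."""
    ds, dt = motion
    if float(ds).is_integer() and float(dt).is_integer():
        # Integer ray-space motion: simple roll, no resampling error.
        return np.roll(L, (int(ds), int(dt)), axis=(2, 3))
    # Fractional ray-space motion: linear interpolation along s and t.
    return nd_shift(L, shift=(0, 0, ds, dt), order=1, mode="nearest")

L = np.random.rand(5, 5, 32, 32)               # toy 4D light field
pred_int = ray_space_predict(L, (1, -2))       # integer motion
pred_frac = ray_space_predict(L, (0.5, 0.25))  # fractional motion
```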
我们介绍了MLPERF小型推理基准(FPGA)平台上MLPERF微小的推理基准的最新结果。我们使用开源HLS4ML和Finn工作流,旨在使FPGA中优化神经网络的AI硬件代码民主化。我们介绍关键字发现,异常检测和图像分类基准任务的设计和实现过程。最终的硬件实现是针对速度和效率量身定制的,可配置的,可配置的空间数据流体系结构,并引入了新的通用优化和作为本工作的一部分开发的常见工作流程。完整的工作流程从量化感知培训到FPGA实施。该解决方案部署在芯片(PYNQ-Z2)和纯FPGA(ARTY A7-100T)平台上。由此产生的提交的潜伏期低至20 $ \ mu $ s和每次推论的低至30 $ \ mu $ j的能耗。我们展示了异质硬件平台上新兴的ML基准如何催化协作和开发新技术和更容易访问的工具。
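A condensed sketch of the hls4ml path from a trained (ideally quantization-aware) Keras model to an FPGA build is shown below. The function calls follow hls4ml's public API, but the checkpoint name and output directory are illustrative placeholders.

```python
import hls4ml
from tensorflow import keras

model = keras.models.load_model("kws_model.h5")  # placeholder checkpoint

# Derive a per-layer HLS configuration from the Keras model.
config = hls4ml.utils.config_from_keras_model(model, granularity="name")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_build",
    part="xc7z020clg400-1",   # the Zynq part on the PYNQ-Z2 board
)
hls_model.compile()           # builds a C simulation for quick validation
# hls_model.build(csim=False) # runs HLS synthesis (long-running)
```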